16 September 2022 | Techdirt


Racism is a human problem. When that problem wears badges, carries guns, and has the power to deprive people of life and liberty, it’s a much more serious problem.

Many US law enforcement agencies have racist roots, agencies formed for the purpose of catching escaped slaves to return them to their white owners. Not every cop is a racist, but modernizing police tactics hasn’t managed to strip agencies of long-existing biases that continue to be displayed by officers. Instead, it has turned bigotry into something that mimics science — a combination of data and AI some people believe is free of human weaknesses. In practice, it’s usually AI trained on biased data performing analysis of even more biased data to come to conclusions indistinguishable from the hunches of average racist cops: i.e., let’s send more cops into neighborhoods containing poor minorities.

As law enforcement agencies seek to address the number of problems they’re responsible for (like the destruction of the community’s trust), they’re making changes. Sometimes these changes are forced on them by consent decrees or lawsuit settlements. Sometimes it just seems like the right thing to do.

Bias training is one of the efforts. There are good ways to perform this training. And then there’s the way the Florida Department of Law Enforcement has chosen to do it. Something that requires immersion and effort to reach a genuine understanding of implicit biases and the harm they cause has been reduced to a slideshow and a short quiz that bypasses the difficult questions officers need to ask themselves and suggests there’s no need for any agency in the state to detail the demographic info of people stopped by cops.

This report from the Tampa Bay Times details just some of the very questionable aspects of the state’s “will this do” anti-bias training.

The Florida Department of Law Enforcement slideshow told officers that traffic stops preceded “nearly every serious race riot in the United States” but provided no details about the police brutality that accompanied the stops. It inaccurately described police interactions that led to riots in Miami and Los Angeles. It cited statistics from nearly 20 years ago that showed higher public confidence in police than is felt today.

And it took roughly 25 minutes to flip through the slides and complete a quiz that one expert called “embarrassingly simple.”

The traffic stop that led to the 1980 riots is glossed over completely.

The training simply said [Arthur] McDuffie “eluded arrest and after an 8-minute chase, he died at the scene after a physical confrontation.”

Followed by: “This resulted in a riot that included buildings being burned.”

What actually happened was McDuffie stopped voluntarily and gave up. At that point, up to a dozen officers began choking and beating him, using their batons and flashlights. One officer drove a squad car over McDuffie’s bike to make it look like it had crashed. One officer testified against the other officers on the scene. All of the officers were acquitted by an all-white jury. None of those facts can be found in the bias training provided by the state.

That slide has since been removed. So have several others, a process that began after Tampa Bay Times journalists began requesting documents from the Department of Law Enforcement. Others have been edited to remove outdated data or questionable assertions.

The training included three slides titled, “Real and Perceived Problems Faced by Minorities”— all of which were deleted from the course after the Times started asking about it.

One cited statistics from 2003 that showed 70% of white people and 41% of Black people had high confidence in police. The slide was updated this summer with a link to a lengthy Gallup poll on race relations that showed 80% of Black respondents felt they were treated less fairly than white people during traffic incidents. To see the recent data, officers would need to click the link.

Another potential source of misinformation was removed as well:

Before it was updated, the training included slides about Black and Latino communities having traffic death rates roughly triple those among white people. Experts questioned the accuracy of those numbers. They also said the slides, which were removed, appeared to try to justify the need to aggressively police communities of color.

Now, these edits will make the slide deck better than it was originally, but that’s a really low bar to set. To call the training “cursory” is to pay it a compliment it doesn’t deserve. At best, the presentation — in its original form — let cops know being openly racist was problematic but that this was just something to be clicked through and checked off the training list.

In its altered form — and those alterations were prompted by journalists, not by any law enforcement professional who had reviewed it or participated in the training — it’s not much better. It still gives the impression that fixing bias is something that can be accomplished by eliminating outward displays of bigotry, like the telling of racist jokes or use of bigoted terms.

That this exists at all is probably a positive sign. But what’s observed here shows it’s probably not the best idea to let cops write their own bias training. Nor is it a good idea to reduce something this important to a short slide show… not if you’re actually interested in changing cop culture.

Filed Under: florida, law enforcement, racial bias, racism, traffic stops

As far as I can tell, in the area where the 5th Circuit appeals court has jurisdiction, websites no longer have any 1st Amendment editorial rights. That’s the result of what appears to me to be the single dumbest court ruling I’ve seen in a long, long time, and I know we’ve seen some crazy rulings of late. However, thanks to Judge Andy Oldham, internet companies no longer have 1st Amendment rights regarding their editorial decision making.

Let’s take a step back. As you’ll recall, last summer, in a fit of censorial rage, the Texas legislature passed HB 20, a dangerously unconstitutional bill that would bar social media websites from moderating as they see fit. As we noted, the bill opens up large websites to a lawsuit over basically every content moderation decision they make (and that’s just one of the problems). Pretty quickly, a district court judge tossed out the entire law as unconstitutional in a careful, thorough ruling that explained why every bit of the law violated websites’ own 1st Amendment rights to put in place their own editorial policies.

On appeal to the 5th Circuit, the court did something bizarre: without giving any reason or explanation at all, it reinstated the law and promised a ruling at some future date. This was procedurally problematic, leading the social media companies (represented by two of their trade groups, NetChoice and CCIA) to ask the Supreme Court to slow things down a bit, which is exactly what the Supreme Court did.

Parallel to all of this, Florida had passed a similar law, and again a district court had found it obviously unconstitutional. That, too, was appealed, yet in the 11th Circuit the court rightly agreed with the lower court that the law was (mostly) unconstitutional. That teed things up for Florida to ask the Supreme Court to review the issue.

However, remember, back in May when the 5th Circuit initially reinstated the law, it said it would come out with its full ruling later. Over the last few months I’ve occasionally pondered (sometimes on Twitter) whether the 5th Circuit would ever get around to actually releasing an opinion. Now it has. And, as 1st Amendment lawyer Ken White notes, it’s “the most angrily incoherent First Amendment decision I think I’ve ever read.”

It is difficult to state how completely disconnected from reality this ruling is, and how dangerously incoherent it is. It effectively says that companies no longer have a 1st Amendment right to their own editorial policies. Under this ruling, any state in the 5th Circuit could, in theory, mandate that news organizations must cover certain politicians or certain other content. It could, in theory, allow a state to mandate that any news organization must publish opinion pieces by politicians. It completely flies in the face of the 1st Amendment’s association rights and the right to editorial discretion.

There’s going to be plenty to say about this ruling, which will go down in the annals of history as a complete embarrassment to the judiciary, but let’s hit the lowest points. The crux of the ruling, written by Judge Andy Oldham, is as follows:

Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say. Because the district court held otherwise, we reverse its injunction and remand for further proceedings.

Considering just how long Republicans (and Oldham was a Republican political operative before being appointed to the bench) have spent insisting that corporations have 1st Amendment rights, this is a major turnaround, and (as noted) an incomprehensible one. Frankly, Oldham’s arguments sound much more like the arguments made by ignorant trolls in our comments than anyone with any knowledge or experience with 1st Amendment law.

I mean, it’s as if Judge Oldham has never heard of the 1st Amendment’s prohibition on compelled speech.

First, the primary concern of overbreadth doctrine is to avoid chilling speech. But Section 7 does not chill speech; instead, it chills censorship. So there can be no concern that declining to facially invalidate HB 20 will inhibit the marketplace of ideas or discourage commentary on matters of public concern. Perhaps as-applied challenges to speculative, now-hypothetical enforcement actions will delineate boundaries to the law. But in the meantime, HB 20’s prohibitions on censorship will cultivate rather than stifle the marketplace of ideas that justifies the overbreadth doctrine in the first place.

Judge Oldham insists that concerns about forcing websites to post speech from Nazis, terrorist propaganda, and Holocaust denial are purely hypothetical. Really.

The Platforms do not directly engage with any of these concerns. Instead, their primary contention—beginning on page 1 of their brief and repeated throughout and at oral argument—is that we should declare HB 20 facially invalid because it prohibits the Platforms from censoring “pro-Nazi speech, terrorist propaganda, [and] Holocaust denial[s].” Red Br. at 1.

Far from justifying pre-enforcement facial invalidation, the Platforms’ obsession with terrorists and Nazis proves the opposite. The Supreme Court has instructed that “[i]n determining whether a law is facially invalid,” we should avoid “speculat[ing] about ‘hypothetical’ or ‘imaginary’ cases.” Wash. State Grange, 552 U.S. at 449–50. Overbreadth doctrine has a “tendency . . . to summon forth an endless stream of fanciful hypotheticals,” and this case is no exception. United States v. Williams, 553 U.S. 285, 301 (2008). But it’s improper to exercise the Article III judicial power based on “hypothetical cases thus imagined.” Raines, 362 U.S. at 22; cf. Sineneng-Smith, 140 S. Ct. at 1585–86 (Thomas, J., concurring) (explaining the tension between overbreadth adjudication and the constitutional limits on judicial power).

These are not hypotheticals. This is literally what these websites have to deal with on a daily basis. And which, under Texas’ law, they no longer could do.

Oldham continually focuses (incorrectly and incoherently) on the idea that editorial discretion is censorship. There’s a reason that we’ve spent the last few years explaining how the two are wholly different — and part of it was to avoid people like Oldham getting confused. Apparently it didn’t work.

We reject the Platforms’ efforts to reframe their censorship as speech. It is undisputed that the Platforms want to eliminate speech—not promote or protect it. And no amount of doctrinal gymnastics can turn the First Amendment’s protections for free speech into protections for free censoring.

That paragraph alone is scary. It basically argues that the state can now compel any speech it wants on private property, as it reinterprets the 1st Amendment to mean that the only thing it limits is the power of the state to remove speech, while leaving open the power of the state to foist speech upon private entities. That’s ridiculous.

Oldham then tries to square this by… pulling in wholly unrelated issues around the few rare, limited, fact-specific cases where the courts have allowed compelled speech.

Supreme Court precedent instructs that the freedom of speech includes “the right to refrain from speaking at all.” Wooley v. Maynard, 430 U.S. 705, 714 (1977); see also W. Va. State Bd. of Educ. v. Barnette, 319 U.S. 624, 642 (1943). So the State may not force a private speaker to speak someone else’s message. See Wooley, 430 U.S. at 714.

But the State can regulate conduct in a way that requires private entities to host, transmit, or otherwise facilitate speech. Were it otherwise, no government could impose nondiscrimination requirements on, say, telephone companies or shipping services. But see 47 U.S.C. § 202(a) (prohibiting telecommunications common carriers from “mak[ing] any unjust or unreasonable discrimination in charges, practices, classifications, regulations, facilities, or services”). Nor could a State create a right to distribute leaflets at local shopping malls. But see PruneYard Shopping Ctr. v. Robins, 447 U.S. 74, 88 (1980) (upholding a California law protecting the right to pamphleteer in privately owned shopping centers). So First Amendment doctrine permits regulating the conduct of an entity that hosts speech, but it generally forbids forcing the host itself to speak or interfering with the host’s own message.

From there, he argues that forcing websites to host speech they disagree with is not compelled speech.

The Platforms are nothing like the newspaper in Miami Herald. Unlike newspapers, the Platforms exercise virtually no editorial control or judgment. The Platforms use algorithms to screen out certain obscene and spam-related content. And then virtually everything else is just posted to the Platform with zero editorial control or judgment.

Except that’s the whole point. The websites do engage in editorial control. The difference from newspapers is that it’s ex post control. If there are complaints, they will review the content afterwards to see if it matches with their editorial policies (i.e., terms of use). So, basically, Oldham is simply wrong here. They do exercise editorial control. That they use it sparingly does not mean they give up the right. Yet Oldham thinks otherwise.

From there, Oldham literally argues there is no editorial discretion under the 1st Amendment. Really.

Premise one is faulty because the Supreme Court’s cases do not carve out “editorial discretion” as a special category of First-Amendment-protected expression. Instead, the Court considers editorial discretion as one relevant consideration when deciding whether a challenged regulation impermissibly compels or restricts protected speech.

To back this up, the court cites Turner v. FCC, which has recently become a misleading favorite among those who are attacking Section 230. But the Turner case really turned on some pretty specific facts about cable TV versus broadcast TV which are not at all in play here.

Oldham also states that content moderation isn’t editorial discretion, even though it literally is.

Even assuming “editorial discretion” is a freestanding category of First-Amendment-protected expression, the Platforms’ censorship doesn’t qualify. Curiously, the Platforms never define what they mean by “editorial discretion.” (Perhaps this casts further doubt on the wisdom of recognizing editorial discretion as a separate category of First-Amendment-protected expression.) Instead, they simply assert that they exercise protected editorial discretion because they censor some of the content posted to their Platforms and use sophisticated algorithms to arrange and present the rest of it. But whatever the outer bounds of any protected editorial discretion might be, the Platforms’ censorship falls outside it. That’s for two independent reasons.

And here it gets really stupid. The ruling argues that because of Section 230, internet websites can’t claim editorial discretion. This is a ridiculously confused misreading of 230.

First, an entity that exercises “editorial discretion” accepts reputational and legal responsibility for the content it edits. In the newspaper context, for instance, the Court has explained that the role of “editors and editorial employees” generally includes “determin[ing] the news value of items received” and taking responsibility for the accuracy of the items transmitted. Associated Press v. NLRB, 301 U.S. 103, 127 (1937). And editorial discretion generally comes with concomitant legal responsibility. For example, because of “a newspaper’s editorial judgments in connection with an advertisement,” it may be held liable “when with actual malice it publishes a falsely defamatory” statement in an ad. Pittsburgh Press Co. v. Pittsburgh Comm’n on Human Rels., 413 U.S. 376, 386 (1973). But the Platforms strenuously disclaim any reputational or legal responsibility for the content they host. See supra Part III.C.2.a (quoting the Platforms’ adamant protestations that they have no responsibility for the speech they host); infra Part III.D (discussing the Platforms’ representations pertaining to 47 U.S.C. § 230)

Then, he argues that there’s some sort of fundamental difference between exercising editorial discretion before or after the content is posted:

Second, editorial discretion involves “selection and presentation” of content before that content is hosted, published, or disseminated. See Ark. Educ. Television Comm’n v. Forbes, 523 U.S. 666, 674 (1998); see also Miami Herald, 418 U.S. at 258 (a newspaper exercises editorial discretion when selecting the “choice of material” to print). The Platforms do not choose or select material before transmitting it: They engage in viewpoint-based censorship with respect to a tiny fraction of the expression they have already disseminated. The Platforms offer no Supreme Court case even remotely suggesting that ex post censorship constitutes editorial discretion akin to ex ante selection. They instead baldly assert that “it is constitutionally irrelevant at what point in time platforms exercise editorial discretion.” Red Br. at 25. Not only is this assertion unsupported by any authority, but it also illogically equates the Platforms’ ex post censorship with the substantive, discretionary, ex ante review that typifies “editorial discretion” in every other context

So, if I read that correctly, websites can now continue to moderate only if they pre-vet all content they post. Which is also nonsense.

From there, Oldham goes back to Section 230, where he again gets the analysis exactly backwards. He argues that Section 230 alone makes HB 20’s provisions constitutional, because it says that you can’t treat user speech as the platform’s speech:

We have no doubts that Section 7 is constitutional. But even if some were to remain, 47 U.S.C. § 230 would extinguish them. Section 230 provides that the Platforms “shall [not] be treated as the publisher or speaker” of content developed by other users. Id. § 230(c)(1). Section 230 reflects Congress’s judgment that the Platforms do not operate like traditional publishers and are not “speak[ing]” when they host user-submitted content. Congress’s judgment reinforces our conclusion that the Platforms’ censorship is not speech under the First Amendment.

Section 230 undercuts both of the Platforms’ arguments for holding that their censorship of users is protected speech. Recall that they rely on two key arguments: first, they suggest the user-submitted content they host is their speech; and second, they argue they are publishers akin to a newspaper. Section 230, however, instructs courts not to treat the Platforms as “the publisher or speaker” of the user-submitted content they host. Id. § 230(c)(1). And those are the exact two categories the Platforms invoke to support their First Amendment argument. So if § 230(c)(1) is constitutional, how can a court recognize the Platforms as First-Amendment-protected speakers or publishers of the content they host?

Oldham misrepresents the arguments of websites that support Section 230, claiming that by using 230 to defend their moderation choices they have claimed in court they are “neutral tools” and “simple conduits of speech.” But that completely misrepresents what has been said and how this plays out.

It’s an upside down and backwards misrepresentation of how Section 230 actually works.

Oldham also rewrites part of Section 230 to make it work the way he wants it to. Again, this reads like some of our trolls, rather than how a jurist is supposed to act:

The Platforms’ only response is that in passing § 230, Congress sought to give them an unqualified right to control the content they host— including through viewpoint-based censorship. They base this argument on § 230(c)(2), which clarifies that the Platforms are immune from defamation liability even if they remove certain categories of “objectionable” content. But the Platforms’ argument finds no support in § 230(c)(2)’s text or context. First, § 230(c)(2) only considers the removal of limited categories of content, like obscene, excessively violent, and similarly objectionable expression. It says nothing about viewpoint-based or geography-based censorship. Second, read in context, § 230(c)(2) neither confers nor contemplates a freestanding right to censor. Instead, it clarifies that censoring limited categories of content does not remove the immunity conferred by § 230(c)(1). So rather than helping the Platforms’ case, § 230(c)(2) further undermines the Platforms’ claim that they are akin to newspapers for First Amendment purposes. That’s because it articulates Congress’s judgment that the Platforms are not like publishers even when they engage in censorship.

Except that Section 230 does not say “similarly objectionable.” It says “otherwise objectionable.” By switching “otherwise objectionable” to “similarly objectionable,” Oldham is insisting that courts like his own get to determine what counts as “similarly objectionable,” and that alone is a clear 1st Amendment problem. The courts cannot decide what content a website finds objectionable. That is, yet again, the state intruding on the editorial discretion of a website.

Also, completely ridiculously, Oldham leaves out that (c)(2) does not just include that list of objectionable categories, but it states: “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” In other words, the law explicitly states that whether or not something falls into that list is up to the provider or user and not the state. To leave that out of his description of (c)(2) is beyond misleading.

Also notable: Oldham completely ignores the fact that Section 230 pre-empts state laws like Texas’s, saying that “no liability may be imposed under any State or local law that is inconsistent with this section.” I guess Oldham is arguing that Texas’s law somehow is not inconsistent with 230, but it certainly is inconsistent with two and a half decades of 230 jurisprudence.

There’s then a long and, again, nonsensical discussion of common carriers, basically saying that the state can magically declare social media websites common carriers. I’m not even going to give that argument the satisfaction of covering it, it is so disconnected from reality. Social media literally meets none of the classifications of traditional common carriers. The fact that Oldham claims that “the Platforms are no different than Verizon or AT&T” makes me question how anyone could take anything in this ruling seriously.

I’m also going to skip over the arguments for why the “transparency” bits are constitutional according to the 5th Circuit, other than to note that California must be happy, because under this ruling its new social media transparency laws would also be deemed constitutional even if they now conflict with Texas’s (that’ll be fun).

There are a few notable omissions from the ruling. It never mentions ACLU v. Reno, which seems incredibly relevant given its discussion of how the internet and the 1st Amendment work together, and is glaring in its absence. Second, it completely breezes past Justice Kavanaugh’s ruling in the Halleck case, which clearly established that under the First Amendment a “private entity may thus exercise editorial discretion over the speech and speakers in the forum.” The only mention of the ruling is in a single footnote, claiming that ruling only applies to “public forums” and saying it’s distinct from the issue raised here. But, uh, the quote (and much of the ruling) literally says the opposite. It’s talking about private forums. This is ridiculous. Third, as noted, the ruling ignores the pre-emption aspects of Section 230. Fourth, while it discusses the 11th Circuit’s ruling regarding Florida’s law, it tries to distinguish the two (while also highlighting where the two Circuits disagree to set up the inevitable Supreme Court battle). Finally, it never addresses the fact that the Supreme Court put its original “turn the law back on” ruling on hold. Apparently Oldham doesn’t much care.

The other two judges on the panel also provided their own, much shorter opinions, with Judge Edith Jones concurring and just doubling down on Oldham’s nonsense. There is an opinion from Judge Leslie Southwick that is a partial concurrence and partial dissent. It concurs on the transparency stuff, but dissents regarding the 1st Amendment.

The majority frames the case as one dealing with conduct and unfair censorship. The majority’s rejection of First Amendment protections for conduct follows unremarkably. I conclude, though, that the majority is forcing the picture of what the Platforms do into a frame that is too small. The frame must be large enough to fit the wide-ranging, free-wheeling, unlimited variety of expression — ranging from the perfectly fair and reasonable to the impossibly biased and outrageous — that is the picture of the First Amendment as envisioned by those who designed the initial amendments to the Constitution. I do not celebrate the excesses, but the Constitution wisely allows for them.

The majority no doubt could create an image for the First Amendment better than what I just verbalized, but the description would have to be similar. We simply disagree about whether speech is involved in this case. Yes, almost none of what others place on the Platforms is subject to any action by the companies that own them. The First Amendment, though, is what protects the curating, moderating, or whatever else we call the Platforms’ interaction with what others are trying to say. We are in a new arena, a very extensive one, for speakers and for those who would moderate their speech. None of the precedents fit seamlessly. The majority appears assured of their approach; I am hesitant. The closest match I see is caselaw establishing the right of newspapers to control what they do and do not print, and that is the law that guides me until the Supreme Court gives us more.

Judge Southwick then dismantles, bit by bit, each of Oldham’s arguments regarding the 1st Amendment and basically highlights how his much younger colleague is clearly misreading a few outlier Supreme Court rulings.

It’s a good read, but this post is long enough already. I’ll just note this point from Southwick’s dissent:

In no manner am I denying the reasonableness of the governmental interest. When these Platforms, that for the moment have gained such dominance, impose their policy choices, the effects are far more powerful and widespread than most other speakers’ choices. The First Amendment, though, is not withdrawn from speech just because speakers are using their available platforms unfairly or when the speech is offensive. The asserted governmental interest supporting this statute is undeniably related to the suppression of free expression. The First Amendment bars the restraints.

This resonated with me quite a bit, and drove home the problem with Oldham’s argument. It is the equivalent of one of Ken White’s famed free speech tropes. Oldham pointed to the outlier cases where some compelled speech was found constitutional, and turned that automatically into “if some compelled speech is constitutional, then it’s okay for this compelled speech to be constitutional.”

But that’s not how any of this works.

Southwick also undermines Oldham’s common carrier arguments and his Section 230 arguments, noting:

Section 230 also does not affect the First Amendment right of the Platforms to exercise their own editorial discretion through content moderation. My colleague suggests that “Congress’s judgment” as expressed in 47 U.S.C. § 230 “reinforces our conclusion that the Platforms’ censorship is not speech under the First Amendment.” Maj. Op. at 39. That opinion refers to this language: “No provider or user of an interactive computer service” — interactive computer service being a defined term encompassing a wide variety of information services, systems, and access software providers — “shall be treated as the publisher or speaker of any information provided by another content provider.” 47 U.S.C. § 230(c)(1). Though I agree that Congressional fact-findings underlying enactments may be considered by courts, the question here is whether the Platforms’ barred activity is an exercise of their First Amendment rights. If it is, Section 230’s characterizations do not transform it into unprotected speech.

The Platforms also are criticized for what my colleague sees as an inconsistent argument: the Platforms analogize their conduct to the exercise of editorial discretion by traditional media outlets, though Section 230 by its terms exempts them from traditional publisher liability. This may be exactly how Section 230 is supposed to work, though. Contrary to the contention about inconsistency, Congress in adopting Section 230 never factually determined that “the Platforms are not ‘publishers.’” Maj. Op. at 41. As one of Section 230’s co-sponsors — former California Congressman Christopher Cox, one of the amici here — stated, Section 230 merely established that the platforms are not to be treated as the publishers of pieces of content when they take up the mantle of content moderation, which was precisely the problem that Section 230 set out to solve: “content moderation . . . is not only consistent with Section 230; its protection is the very raison d’etre of Section 230.” In short, we should not force a false dichotomy on the Platforms. There is no reason “that a platform must be classified for all purposes as either a publisher or a mere conduit.” In any case, as Congressman Cox put it, “because content moderation is a form of editorial speech, the First Amendment more fully protects it beyond the specific safeguards enumerated in § 230(c)(2).” I agree.

Anyway, that’s the quick analysis of this mess. There will be more to come, and I imagine this will be an issue for the Supreme Court to sort out. I wish I had confidence that they would not contradict themselves, but I’m not sure I do.

The future of how the internet works is very much at stake with this one.

Filed Under: 1st amendment, 5th circuit, andy oldham, content moderation, hb 20, leslie southwick, social media, texas

At some point in the last five years, people in positions of media influence and power unilaterally decided that NYU marketing professor Scott Galloway was supposed to be everywhere, constantly, pontificating about absolutely everything, constantly. As a result, you now can’t go fifteen minutes without Galloway, who makes an estimated $5 million annually in speaking fees alone, wandering into punditry eyeline.

This week, Galloway spent his time pushing the hot DC claim du jour: that TikTok is a profound menace to the planet and should be banned. He made the point at the Vox Code conference, then hopped over to Bill Maher’s HBO show to make a similar pronouncement:

WATCH: @profgalloway lays it down: “TikTok should be banned, full-stop” pic.twitter.com/dlML4Y695c

— Marcus Baram (@mbaram) September 10, 2022

Like most of the folks who hyperventilate about TikTok (see FCC Commissioner Brendan Carr, or new billionaire Politico owner Mathias Döpfner), there’s really not a lot of substance here. The underlying claim is that China directly controls TikTok, and will inevitably use the very popular social media platform to spy on or influence American children in nefarious and very frightening ways.

Actual evidence of TikTok being uniquely dangerous (especially any indication China has used or could use TikTok to bedazzle U.S. children) has been sorely lacking, but that doesn’t stop folks from heading to the fainting couches. This face fanning has been especially popular among a certain set of xenophobic DC politicians, and companies that don’t want to have to directly compete with China.

In a world with 4.8 billion internet users, TikTok is a smashing success. For now. It’s also owned by Chinese company Bytedance. The concern is that the Chinese government will work with Bytedance to exploit U.S. TikTok user data for nefarious purposes, or use the platform to feed U.S. kids propaganda. The other thought is that because China bans U.S. services and apps, we should ban TikTok in kind.

The problem: the U.S. is a corrupt, xenophobic, superficial dumpster fire, so most of the “solutions” to this potential problem have been stupid and performative.

Trump’s solution, you’ll recall, was to use an unconstitutional executive order to force Bytedance to sell TikTok to his buddies over at Walmart and Oracle. You know, the same Oracle with a long history of privacy violations, super dodgy legal and lobbying practices, and a CEO who may or may not believe in this whole democracy thing.

That dumb deal fell apart, but Oracle still managed to secure itself a lucrative gig hosting U.S. TikTok user data, and a key role determining TikTok’s content moderation practices. So basically our ingenious solution to the “TikTok might harm the kids” problem was to tether it to an extremely shady U.S. company with extremely shaky ethics, low privacy standards, and its own profitable tethers to China.

Why Banning TikTok Doesn’t Fix What You Think It Does

Enter the “ban TikTok!” folks, who think they’ve got a simple solution to a complicated problem.

Here’s the thing: you could ban TikTok immediately, and China could still hoover up location, browsing, and behavior data from an ocean of completely unaccountable and hugely shady data brokers and middlemen. And they can do that because U.S. privacy and security standards are hot garbage. And in some instances, they’re hot garbage because of the same people now complaining about TikTok.

Both Carr and Senator Ted Cruz have extensive histories of undermining regulatory oversight and privacy rules at absolutely every opportunity, yet both are lauded by Galloway in a blog post for being heroic leaders in the “ban TikTok” crusades. Galloway’s a top pundit, yet somehow can’t see that Carr and Cruz are engaged in zero-calorie xenophobic theatrics, and couldn’t care less about actual consumer privacy.

For literally thirty straight years, at absolutely every single turn, we prioritized making money over transparency or consumer privacy. As a result, consumer privacy protections are garbage, regulators are toothless, governments exploit the attention economy to avoid having to get warrants, and any idiot with a nickel can easily build gigantic, hugely detailed profiles about your everyday life without your consent.

If you’re only just now waking up to this threat exclusively because China might abuse this massive, unaccountable mess we’ve created, you’re both arriving late to the party, and you’re not really understanding the full scope of the problem.

“Banning TikTok” accomplishes little if you’re genuinely interested in meaningful surveillance and privacy reform. There will always be another TikTok. There’s an ocean of companies engaging in the same or worse behavior as TikTok because we’ve sanctioned this kind of guardrail-optional hyper-collection and monetization of consumer behavioral data at every step of the way.

As Washington Post reporter Taylor Lorenz notes, fixating on “ban TikTok and the bad man will be defeated” is a simplistic distraction from our longstanding failures on consumer privacy and consumer protection:

Instead of focusing on broader privacy and consumer protection laws, which would affect US platforms like FB, people like Scott scapegoat and fear monger over TikTok. It’s an intentional distraction from real, comprehensive data/privacy reform and ppl should see it for what it is https://t.co/fb1UeuAoLq

— Taylor Lorenz (@TaylorLorenz) September 11, 2022

Many of the folks beating the “ban TikTok” drum may be well intentioned but just don’t really understand how broken the consumer privacy landscape is. They may not understand that this is a problem that’s exponentially more complicated than just what we do with a single app. Freaking out exclusively about a single app tells me you either don’t really understand the data-hoovering monster we’ve built, or don’t really care if anybody other than China exploits it (waves tiny American flag patriotically).

Many of the other folks calling for a TikTok ban aren’t operating in good faith. Facebook/Meta, for example, spends a lot of time spreading scary stories about TikTok in the press and DC because they want to crush a competitive threat they’ve been incapable of out-innovating. Similarly, Politico’s owner is on the Netflix board and simply wants to curtail what he sees as a threat to market and advertising mindshare.

Then there’s just a ton of Silicon Valley folks who believe they inherently own and deserve the advertising market share TikTok occupies. And then of course there’s just a whole bunch of rank bigots who are mad because darker skinned human beings built a popular app, and try to hide this bigotry behind patriotic, pseudo national security concerns.

All of this converges to create a stupid, soupy mess that’s devoid of any actual fixes to any actual problems. Hyper surveillance and propaganda are very real problems that require a dizzying array of complicated fixes, including media and privacy policy reform, antitrust reform, tougher consumer protection standards, education reform, and a meaningful privacy law for the internet era.

We need to fund and staff the FTC, whose privacy enforcement division is pathetic by international standards. We need real technical solutions for real problems (the cellular network SS7 flaw, the sorry state of satellite security, the rampant lack of security in the Internet of things space).

We don’t get these things because real policy solutions are boring and don’t get the ad engagement clicks (you’ll never get on Bill Maher talking about antitrust reform). We don’t get these things because they might harm the revenues of U.S. companies. Even slightly more strict guard rails on data collection and monetization might inconvenience rich men like Mark Zuckerberg and Mathias Döpfner.

So what we get instead are rich influencers nabbing clicks by suggesting you can cure a significant portion of technology’s biggest issues by banning a simple app. Which is a shame, because most of the warnings privacy advocates have levied for decades are coming home to roost in an ugly post-Roe reality, and time is growing short when it comes to implementing actual, meaningful reform.

Filed Under: ban, china, consumer protection, fcc, ftc, national security, privacy, scott galloway, spying, surveillance Companies: tiktok

Tuesday, former Twitter cybersecurity executive Pieter “Mudge” Zatko testified in front of a congressional committee regarding his whistleblower complaint[1][2][3] against Twitter. Though I’m a techie, I thought I’d write up some comments from the business angle.

It’s difficult getting an unbiased viewpoint of the actual issues. The press sides with whistleblowers. The cybersecurity community sides with champions – those who fight for the Cause of ever more security.

The thing is, on its face, Mudge’s complaint is false. It’s based on the claim that Twitter “lied” about its cybersecurity to the government, shareholders, and its users. But there’s no objective evidence of this, only the subjective opinion of Mudge that Twitter wasn’t doing enough for cybersecurity.

What I see here is that Mudge is acting as a cybersecurity activist. The industry has many activists who believe security is a Holy Crusade, a Cause, a Moral duty, an End in itself. The crusaders are regularly at odds with business leaders who view cybersecurity merely as a means to an end, and apply a cost-vs-benefit analysis to it.

If you hire an activist, such a falling out is inevitable. It’s like if oil companies hired a Greenpeace activist to be an executive. Or like how Google hires activists to be “AI ethicists” and then later has to keep firing them [#1][#2][#3].

Mudge is a technical expert going back decades. He was there at the beginning (I define the 1990s as the beginning), and his work helped shape today’s InfoSec industry. He’s got a lot of credibility in the industry, and it’s all justified.

He was hired to head up Twitter’s security and served in that role through most of 2021. He was fired at the start of 2022, and last month he filed a “whistleblower complaint” with the government, alleging lax cybersecurity practices, specifically that Twitter lied to investors and failed to live up to a 2011 FTC agreement to secure “private” data.

There’s no particular reason to distrust Mudge. Twitter would certainly like to discredit him as being disgruntled for being fired. But that’s unlikely.

Instead, what I read in the complaint is disgruntlement over cybersecurity (not over being fired). This has been the case for much of his career. He thinks people should do more to be secure. His “Cyber UL” effort is a good example, as he pressured IoT device makers to follow a strict set of cybersecurity rules. For fellow activists, the desired set of rules was just the beginning. For business types, they were excessive, with costs that outweighed their benefits.

Is Twitter secure? Maybe, probably not. Twitter trails the FAANG leaders in the industry (Facebook, Apple, Amazon, Netflix, Google) in a number of technical areas, so it’s easy to think they are behind in cybersecurity as well. On the other hand, they are ahead of most of the rest of the tech industry, not first tier maybe, but definitely second tier.

In other words, in all likelihood, Twitter is ahead of the norm, ahead of the average, just not up to the same standard set by the leaders in tech.

But for cybersecurity activists, even the FAANG companies are not secure enough. That’s because nobody is ever secure enough. There is no standard by which you can say “we are secure enough”.

By any rational measure, the Internet is secure enough. For example, during the pandemic, restaurants put menus and even ordering online, accessible via the browser or app, to minimize customer contact with staff. Paying by credit card using these apps and services was still more “secure” than giving the staff your credit card physically. This was true even if you were accessing the net over the local unencrypted WiFi.
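To make that concrete, here is a minimal sketch (in Python, standard library only) of what actually protects that credit card number in the restaurant example: the TLS session is negotiated directly between the customer’s device and the ordering service, so the local WiFi only ever carries ciphertext. The hostname below is a placeholder, not any real ordering service.

```python
# Sketch: why ordering over "unencrypted WiFi" is still protected in practice.
# TLS is negotiated end-to-end between the client and the server, so the WiFi
# network only ever sees ciphertext. Hostname is a placeholder.
import socket
import ssl

host = "example.com"  # hypothetical ordering/payment endpoint

context = ssl.create_default_context()  # verifies the server's certificate

with socket.create_connection((host, 443)) as tcp_sock:
    with context.wrap_socket(tcp_sock, server_hostname=host) as tls_sock:
        # Anything written on this socket is encrypted before it touches the
        # network, regardless of whether the underlying WiFi is encrypted.
        print("Negotiated:", tls_sock.version())        # e.g. 'TLSv1.3'
        print("Cipher:", tls_sock.cipher()[0])
        print("Cert subject:", tls_sock.getpeercert().get("subject"))
```

The point is that the transport encryption travels with the application, not with the access network, which is why the unencrypted coffee-shop WiFi isn’t the gaping hole it’s often made out to be.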

There is a huge disconnect between what the real world considers “secure enough” and what cybersecurity activists demand.

One of Mudge’s complaints was about servers being out-of-date. Cybersecurity activists have a fetish for up-to-date software, seeing the failure to keep everything up-to-date all-the-time as some sort of moral weakness (sloth, villainy, greed).

But the business norm is out-of-date software. For example, if you go on Amazon AWS right now and spin up a new default RedHat instance, you get RedHat 7, which first shipped in 2014 (eight years ago). Yes, it’s still nominally supported with security patches, but it lacks many modern features needed for better security.
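If you want to check what a “default” image actually handed you, the standard place to look on RHEL 7 and most modern distributions is /etc/os-release. Here’s a minimal, hedged sketch in Python; nothing AWS-specific is assumed.

```python
# Sketch: report what distribution/version a Linux host is actually running,
# e.g. after spinning up a "default" cloud image. /etc/os-release is standard
# on RHEL 7+ and most modern distributions.
from pathlib import Path

def read_os_release(path="/etc/os-release"):
    info = {}
    for line in Path(path).read_text().splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            info[key] = value.strip().strip('"')
    return info

if __name__ == "__main__":
    info = read_os_release()
    # e.g. "Red Hat Enterprise Linux Server 7.9" on the image described above
    print(f"{info.get('NAME', 'unknown')} {info.get('VERSION_ID', '?')}")
```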

The subjective claim is that Twitter was deficient for not having the latest software. That’s just the cyber-activist point of view. From the point of view of industry, it’s the norm.

The entire complaint reads the same. It’s a litany of the standard complaints, slightly modified to apply to Twitter, that the entire industry has against their employers. It’s all based upon their companies not doing enough.

Of particular note is the Twitter-specific issue of protecting private information like Direct Messages (DMs). The thing is, anything less than end-to-end encryption is still a failure. Mudge points to a lack of disk encryption, and to the fact that thousands of employees had access to private DMs, as evidence that they aren’t “secure.” But even if that weren’t the case, DMs still wouldn’t be secure, because they aren’t end-to-end encrypted.

Twitter isn’t lying about this. They aren’t claiming DMs are end-to-end encrypted. I suppose they are deficient in not making it clearer that DMs aren’t as private as some users might hope.

But the solution cyber-activists want isn’t transparency into the lack of DM security, but more DM security. They aren’t asking Twitter to be clear about how they prevent prying eyes from seeing DMs, they are demanding absolute security for the DMs. This reveals their fundamental prejudice.
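For readers who want the distinction made concrete, here is a minimal end-to-end encryption sketch using the PyNaCl library. It is purely illustrative and describes no real DM system, but it shows why, under E2E, server-side disk encryption and employee access controls stop being the thing that matters: the server never holds anything it could read.

```python
# Minimal end-to-end encryption sketch using PyNaCl (illustrative only;
# not a description of any real DM implementation).
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device; private keys never
# leave the device, so the server never has what it needs to decrypt.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts to Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# The platform stores/relays only `ciphertext`. Disk encryption or employee
# access controls on the server are irrelevant to its confidentiality.

# Bob decrypts with his own private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```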

He wasn’t an executive

Being an activist meant that Mudge wasn’t an executive. His goal wasn’t to further the interests of the company/shareholders. His goal was to further the interests of cybersecurity.

One of these days I’m going to write a guide explaining business to hackers. This will be one of the articles I’ll be writing, explaining executives to rank-and-file underlings.

What we see here is Mudge acting like an underling instead of an executive.

Part of his complaint is that the now-CEO, Parag Agrawal, pressured him into lying to the board, specifically into telling the board’s risk committee that security was better than it really was.

Of course Agrawal did. He’s supposed to do that — push hard for his point-of-view. And Mudge was supposed to push just as hard back, especially if he perceived it as a request to lie.

The thing you need to learn about corporate executives is that they are given a lot of responsibility, and a lot of power, but nonetheless must compromise and cooperate.

Underlings often don’t really grasp this. They don’t have responsibility. Like when you hear about a company blaming a compromise on an intern — false on its face because interns don’t have responsibility. Underlings don’t have a lot of power, either. Lastly, underlings lack skills for compromise and collaboration, but that’s okay, because “teamwork” is more of a platitude than a requirement at their level.

To fulfill their own responsibilities, executives must push hard on others. To a certain extent, this means all executives are jerks. But at the same time, they expect fellow executives to push back just as hard; they expect that there is give-and-take, compromise, and collaboration for the ultimate good of the corporation. They expect that when they push hard on the parts that concern them, you push just as hard back to defend your turf and pursue your own goals. But they also expect that such pushback is driving toward compromise, not scorched-earth victory for your side.

If you, as the typical underling, are called to report something to a board committee, you can expect that one or more executives are going to talk to you in order to influence what you are going to say. I’ve dealt with many cybersecurity underlings in this position and heard their tales, and frankly, they handled the situations better than Mudge seems to have.

Underlings expect that their bosses will help defend them in their work disputes. But executives don’t have that luxury. They are at the top of the food chain and are themselves responsible for resolving conflicts. There is nobody to go to in order to complain: not the board who only wants results, and not HR, because you are above HR. Not anybody — you have to resolve your own disputes.

Mudge’s complaint seems to be about looking for dispute resolution in the court of public opinion, because he was unable to resolve his dispute with Agrawal himself.

A good example of a true executive resigning is when James Mattis resigned as Trump’s Secretary of Defense. In his letter, he lamented the fact that he and Trump didn’t agree:

Because you have the right to have a Secretary of Defense whose views are better aligned with yours on these and other subjects, I believe it is right for me to step down from my position.

Note that Mattis doesn’t claim there’s some objective measure of which side is right and which side is wrong. Instead, Mattis only claims that they couldn’t agree.

In contrast, Mudge’s complaint is full of assertions that he’s objectively right and Agrawal objectively wrong. And since, in this telling, Agrawal was objectively wrong, Agrawal must’ve been lying.

As a former executive, and somebody who consults with executives, I find Mudge’s description of the events shocking. He’s talking like a whiny underling, not like an executive.

Mudge’s complaint touches on a few ethical issues.

Most such ethical issues are really politics in disguise. Facebook found this out with their attempts to deal with misinformation ethics and AI ethics. They found it just opened festering political wounds.

If you can somehow avoid politics then you’ll get mired in academics. To be fair, when you ignore academic philosophy, you’ll end up re-inventing Kant vs. Hegel, and doing it poorly. But at the same time, academics can spend years debating Kant vs. Hegel and still come to no conclusion.

But what we are talking about here is professional ethics, and that’s much simpler. Most professional ethics are about protecting trust in the profession (“don’t lie”) and resolving conflicts you are likely to encounter. For example, journalists’ ethics involve long discussions of “off the record” stuff, because it’s an issue they regularly encounter.

The cybersecurity profession has the mistaken belief that “security” is its highest ethical duty, to the point where practitioners think it’s good to lie to people for their own good, as long as doing so achieves better security.

This activism has hugely damaged our profession. Most cybersecurity professionals are frustrated that they can’t get business leaders to listen to them. When you talk to the other side, to the business leaders, you’ll see that the primary reason they don’t listen is that they don’t trust the cybersecurity professionals. Maybe you are truthful, but they still won’t listen to you because the legions of cybersecurity professionals who have preceded you tried to mislead business leaders to get their way — to serve the Holy Crusade.

On the opposite side of the coin are those demanding that cybersecurity professionals downplay their honest concerns. For example, when a pentester hands over a report documenting how easy it was to break in, the person who hired them may ask for certain things to be edited, to downplay the severity of what was found.

It’s a difficult problem. Sometimes they are right. Sometimes the issue is exaggerated. Sometimes it’s written in a way that can be misinterpreted.

But sometimes, they are just asking the pentester to lie on their behalf.

We should have a professional ethics guide in our industry. It should say that in such situations you don’t lie. One way to handle this is to have them put their request in writing, which filters out most illegitimate requests. Another is to use the passive voice and similar framing, to make sure a statement won’t be mistaken for your own opinion.

Mudge describes a case where Agrawal specifically requested things not be put into writing. This is a big red flag, a real concern.

But at the same time, it’s not an automatic failure. It’s a common problem that things put in writing can be misleading when taken out of context. This happens all the time, especially in lawsuits, where the opposing side will cherry pick things out of context to show the jury. Long term executives learn to avoid written statements that can be used misleadingly against them in a court of law.

But here, the issue was avoiding things in writing that could confuse the board. That’s worrisome. I’m not sure I believe Mudge’s one-sided account, given that his other descriptions are so problematic. Even when somebody explicitly asks you to lie, they will remember the discussion much differently: in their memory, they never asked you to lie.

The solution to such problems, if you find yourself in them, is to push back in a collaborative manner. Saying something like “I won’t lie to the board for you” is combative, not constructive. Better to say something like “I don’t understand what you are asking me to do. I think that would mislead the board, which I couldn’t do, of course.”

The thing that’s important here is that “ethics” aren’t an excuse to attack your opponent. It’s easy to deliberately misinterpret the statements and actions of another as representing an ethical failure. Your primary duty is to protect your own ethics.

I’m a techie, as techie as they get.

But I’ve also been an executive and interacted with executives at many companies. What I read here in Mudge’s complaint aren’t the words of an executive, but the words of an activist. It has all the clichés of cybersecurity activism and the immaturity of underlings in resolving disputes.

You won’t get a critical discussion of this event in the press, as they generally take the side of the whistleblower. You won’t get a critical discussion from the InfoSec community, because they worship rock stars, and share the Holy Crusade for better cybersecurity.

I have no doubt Twitter’s cybersecurity is behind that of FAANG leaders in the tech industry. They seem behind on so many other issues. What freaks me out isn’t that their 500,000 servers are running outdated Linux (as Mudge describes). It freaks me out that this means that they have 1 server for each 1000 users (Netflix, whose demands are higher, has 10,000 users per server).
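For what it’s worth, the arithmetic behind those ratios is simple. The sketch below uses only the figures cited in this paragraph (500,000 servers, roughly 1,000 users per server, versus Netflix’s claimed 10,000 users per server), so the implied user count is the author’s, not an independent number.

```python
# Back-of-the-envelope check of the ratios cited above, using only the
# figures in this paragraph (they imply roughly 500M Twitter users).
twitter_servers = 500_000
twitter_users_per_server = 1_000
implied_twitter_users = twitter_servers * twitter_users_per_server
print(f"Implied Twitter user base: {implied_twitter_users:,}")   # 500,000,000

netflix_users_per_server = 10_000
efficiency_gap = netflix_users_per_server / twitter_users_per_server
print(f"Netflix serves ~{efficiency_gap:.0f}x more users per server")  # ~10x
```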

But saying Twitter is flawed is far from saying there’s any objective evidence in the whistleblower complaint that Twitter is misleading shareholders, government agencies like the FTC, or users as to their security.

Robert Graham is a well known security professional. You can follow him on Twitter at @ErrataRob. A version of this post was originally posted to his Substack and reposted here with permission.

Filed Under: activism, cybersecurity, mudge, pieter zatko, security, tradeoffs Companies: twitter

On Thursday, the White House hosted the United We Stand summit, to bring together people to take action against what they refer to as “hate-fueled violence.” This seems like a good idea for a summit, at a time when so much of politics is focused on grievances and culture wars that seem to inevitably lead to bigotry and violence. It’s good to see that the White House can actually talk about some of this and take a stand, rather than cowering behind traditional platitudes.

Indeed, in addition to the summit, the White House announced a bunch of initiatives that… actually sound pretty good, in general. More funding for education, and for community organizations that combat violence and hate, and more tools to promote digital literacy. I’m perhaps less convinced that some of the other plans make sense, including funding for law enforcement (which has been a bastion of hatred itself lately) and efforts to “increase school security,” which seem to be about turning schools into prison-like atmospheres with security theater that makes children less safe.

As part of the summit, a bunch of the big social media platforms announced new policies to be more aggressive towards hatred, and most of those sound pretty reasonable.

YouTube is expanding its policies to combat violent extremism by removing content glorifying violent acts for the purpose of inspiring others to commit harm, fundraise, or recruit, even if the creators of such content are not related to a designated terrorist group. YouTube will also launch an educational media literacy campaign across its platform to assist younger users in particular in identifying different manipulation tactics used to spread misinformation – from using emotional language to cherry picking information. This campaign will first launch in the U.S. before expanding to other countries over time. Finally, YouTube will support the McCain Institute and EdVenture Partners’ Invent2Prevent program with ongoing funding and training. The program challenges college students to develop their own dynamic products, tools, or initiatives to prevent targeted violence and terrorism.

Twitch will accelerate its ongoing commitment to deterring hate in the livestreaming space this year by releasing a new tool that empowers its streamers and their communities to help counter hate and harassment and further individualize the safety experience of their channels. Twitch will also launch new community education initiatives on topics including identifying harmful misinformation and deterring hateful violence.

Microsoft is expanding its application of violence detection and prevention artificial intelligence (AI) and Machine Learning (ML) tools and using gaming to build empathy in young people. The company has developed AI/ML tools with appropriate privacy protections that can help detect credible threats of violence or to public safety, and is making a basic, more affordable version of these tools accessible to schools and smaller organizations to assist in violence prevention. Microsoft is also developing a new experience on Minecraft: Education Edition to help students, families and educators learn ways to build a better and safer online and offline world through respect, empathy, trust and safety.

Meta is forging a new research partnership with the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism to analyze trends in violent extremism and tools that help communities combat it. Meta will also partner with Search For Common Ground to provide trainings, workshops, and skill-building to equip community-based partners working locally to counter hate-fueled violence with tools to help amplify their work.

And… that’s all good? But, it’s weird, because part of what enables all of the above companies to do this sorta stuff is the fact that they know they have Section 230, which, along with the 1st Amendment, helps protect them against frivolous lawsuits over their content moderation decisions.

And even though it’s Section 230 that helps enable sites to do this… in his own speech, Biden lashed out at the tech companies and Section 230.

I think most of the speech is actually pretty good, honestly. But, not for the first time, Biden gets weirdly focused on internet companies and Section 230, as if they’re the problem. At minute 22:40 in the video above he says:

And hold social media platforms accountable for spreading hate and fueling violence.

And I’m calling on Congress to get rid of special immunity for social media companies and impose much stronger transparency requirements on all of them.

This is all extremely confused and ridiculously counterproductive. We’ve explained this before, and have even had Biden advisors insist that they understand these issues, but it appears that no one is able to explain it to the President.

First of all, if you want social media companies to figure out the best ways to deal with hate and fueling violence, you need Section 230, because it’s what allows them the freedom to experiment and put in place the other ideas mentioned above. It allows them the ability to test different ideas, and not face crippling liability for mistakes. It allows them to see what actually has an impact and what works.

Removing Section 230 goes against all of those wishes. Because of the nature of the 1st Amendment, without Section 230, many websites are more likely to take a totally hands-off approach to moderation. The only liability they can face under the 1st Amendment standards endorsed by the courts is if they have actual knowledge of law-violating content on their site. The easiest way to avoid that is not to look and not to moderate much at all.

In other words, this call to remove Section 230 will encourage many sites to do less moderation and to allow more hatred to roam free.

Biden seems to falsely believe that removing Section 230 will magically make hate illegal, and create a cause of action with which people can sue websites. That’s just fundamentally wrong. Whether we like it or not, such hate speech remains protected under the 1st Amendment, so there’s no direct legal liability anyway. Removing Section 230 doesn’t change that. And, again, even if the content somehow reaches a level where it does break the law, the website cannot be held liable for it under the 1st Amendment unless it had knowledge that the content was illegal.

Getting rid of Section 230 makes things worse, not better.

Second, Biden is simply lying when he says it’s a “special immunity for social media.” It is not. He’s wrong. Very wrong. Section 230 protects smaller companies way more than it helps the big companies, and it protects users and their own speech way more than it protects any company (by enabling sites to host third party content in the first place). Indeed, getting rid of it would do more to harm the most marginalized than protect them.

It’s bizarre that the President still gets this so wrong.

Finally, on the claims of “transparency,” once again, this is extreme ignorance. Forcing websites to be transparent about their content moderation practices makes it harder to stop malicious actors because you’re giving them the roadmap to how to game the systems. It also makes it that much more difficult for websites to adjust and adapt to the dynamic and ever-changing methods of malicious actors (i.e. those wishing to spread hate on the platform).

So, both of these proposals would almost certainly increase the amount of hateful speech online. And I know that people in the Biden administration know this. And yet they let the President continue to spread this counterproductive nonsense.

It’s all really too bad. A summit like this is a good thing. Countering hatred and violence is a good thing. Many of the programs announced at the summit sound quite helpful.

But the attacks on the 1st Amendment and tech (ironically at the same time that so many tech companies announced new programs) are not just silly, they’re actively counterproductive to the overall goal.

Filed Under: 1st amendment, content moderation, hatred, joe biden, section 230, summit, transparency, violence

codeSpark’s mission is to help all kids learn to code by igniting their curiosity in computer science and turning programming into play. The app is designed to teach kids 4 to 9 the foundations of computer science through puzzles, coding challenges, and creative tools. It’s a great way for your kid to learn how to code, and it has no ads or in-game purchases. Kids learn concepts such as sequencing, loops, conditional statements, events, Boolean logic, sorting, and variables (coming soon). Get 3 months of unlimited access for 2 accounts for $18.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

When a proposed new law is sold as “protecting kids online,” regulators and commenters often accept the sponsors’ claims uncritically (because… kids). This is unfortunate because those bills can harbor ill-advised policy ideas. The California Age-Appropriate Design Code (AADC / AB2273, just signed by Gov. Newsom) is an example of such a bill. Despite its purported goal of helping children, the AADC delivers a “hidden” payload of several radical policy ideas that sailed through the legislature without proper scrutiny. Given the bill’s highly experimental nature, there’s a high chance it won’t work the way its supporters think–with potentially significant detrimental consequences for all of us, including the California children that the bill purports to protect.

In no particular order, here are five radical policy ideas baked into the AADC:

Permissioned innovation. American business regulation generally encourages “permissionless” innovation. The idea is that society benefits from more, and better, innovation if innovators don’t need the government’s approval.

The AADC turns this concept on its head. It requires businesses to prepare “impact assessments” before launching new features that kids are likely to access. Those impact assessments will be freely available to government enforcers at their request, which means the regulators and judges are the real audience for those impact assessments. As a practical matter, given the litigation risks associated with the impact assessments, a business’ lawyers will control those processes–with associated delays, expenses, and prioritization of risk management instead of improving consumer experiences.

While the impact assessments don’t expressly require government permission to proceed, they have some of the same consequences. They put the government enforcer’s concerns squarely in the room during the innovation development (usually as voiced by the lawyers), they encourage self-censorship by the business if they aren’t confident that their decisions will please the enforcers, and they force businesses to make the cost-benefit calculus before the business has gathered any market feedback through beta or A/B tests. Obviously, these hurdles will suppress innovations of all types, not just those that might affect children. Alternatively, businesses will simply route around this by ensuring their features aren’t available at all to children–one of several ways the AADC will shrink the Internet for California children.

Also, to the extent that businesses are self-censoring their speech (and my position is that all online “features” are “speech”) because of the regulatory intervention, then permissioned innovation raises serious First Amendment concerns.

Disempowering parents. A foundational principle among regulators is that parents know their children best, so most child protection laws center around parental decision-making (e.g., COPPA). The AADC turns that principle on its head and takes parents completely out of the equation. Even if parents know their children best, per the AADC, parents have no say at all in the interaction between a business and their child. In other words, despite the imbalance in expertise, the law obligates businesses, not parents, to figure out what’s in the best interest of children. Ironically, the bill cites evidence that “In 2019, 81 percent of voters said they wanted to prohibit companies from collecting personal information about children without parental consent” (emphasis added), but then the bill drafters ignored this evidence and stripped out the parental consent piece that voters assumed. It’s a radical policy for the AADC to essentially tell parents “tough luck” if they don’t like the Internet that the government is forcing on their children.

Fiduciary obligations to a mass audience. The bill requires businesses to prioritize the best interests of children above all else. For example: “If a conflict arises between commercial interests and the best interests of children, companies should prioritize the privacy, safety, and well-being of children over commercial interests.” Although the AADC doesn’t use the term “fiduciary” obligations, that’s functionally what the law creates. However, fiduciary obligations are typically imposed in 1:1 circumstances, like a lawyer representing a client, where the professional can carefully consider and advise about an individual’s unique needs. It’s a radical move to impose fiduciary obligations towards millions of individuals simultaneously, where there are no individual considerations at all.

The problems with this approach should be immediately apparent. The law treats children as if they all have the same needs and face the same risks, but “children” are too heterogeneous to support such stereotyping. Most obviously, the law lumps together 17-year-olds and 2-year-olds, even though their risks and needs are completely different. More generally, consumer subpopulations often have conflicting needs. For example, it’s been repeatedly shown that some social media features provide net benefit to a majority or plurality of users, but other subcommunities of minors don’t benefit from those features. Now what? The business is supposed to prioritize the best interests of “children,” but the presence of some children who don’t benefit indicates that the business has violated its fiduciary obligation towards that subpopulation, and that creates unmanageable legal risk–despite the many other children who would benefit. Effectively, if businesses owe fiduciary obligations to diverse populations with conflicting needs, it’s impossible to serve that population at all. To avoid this paralyzing effect, services will screen out children entirely.

Normalizing face scans. Privacy advocates actively combat the proliferation of face scanning because of the potentially lifelong privacy and security risks created by those scans (i.e., you can’t change your face if the scan is misused or stolen). Counterproductively, this law threatens to make face scans a routine and everyday occurrence. Every time you go to a new site, you may have to scan your face–even at services you don’t yet know if you can trust. What are the long-term privacy and security implications of routinized and widespread face scanning? What does that do to people’s long-term privacy expectations (especially kids, who will infer that face scans are just what you do)? Can governments use the face scanning infrastructure to advance interests that aren’t in the interests of their constituents? It’s radical to motivate businesses to turn face scanning of children into a routine activity–especially in a privacy bill.

(Speaking of which–I’ve been baffled by the low-key response of the privacy community to the AADC. Many of their efforts to protect consumer privacy won’t likely matter in the long run if face scans are routine).

Frictioned Internet navigation. The Internet thrives in part because of the “seamless” nature of navigating between unrelated services. Consumers are so conditioned to expect frictionless navigation that they respond poorly when modest barriers are erected. The Ninth Circuit just explained:

The time it takes for a site to load, sometimes referred to as a site’s “latency,” is critical to a website’s success. For one, swift loading is essential to getting users in the door…Swift loading is also crucial to keeping potential site visitors engaged. Research shows that sites lose up to 10% of potential visitors for every additional second a site takes to load, and that 53% of visitors will simply navigate away from a page that takes longer than three seconds to load. Even tiny differences in load time can matter. Amazon recently found that every 100 milliseconds of latency cost it 1% in sales.

After the AADC, before you can go to a new site, you will have to either scan your face or upload age-authenticating documents. This adds many seconds or minutes to the navigation process, plus there’s the overall inhibiting effect of concerns about privacy and security. How will these barriers change people’s web “surfing”? I expect it will fundamentally change people’s willingness to click on links to new services. That will benefit incumbents–and hurt new market entrants, who have to convince users to do age assurance before users trust them. It’s radical for the legislature to make such a profound and structural change to how people use and enjoy an essential resource like the Internet.

A final irony. All new laws are essentially policy experiments, and the AADC is no exception. But to be clear, the AADC is expressly conducting these experiments on children. So what diligence did the legislature do to ensure the “best interest of children,” just like it expects businesses to do post-AADC? Did the legislature do its own impact assessment like it expects businesses to do? Nope. Instead, the AADC deploys multiple radical policy experiments without proper diligence and basically hopes for the best for children. Isn’t it ironic?

I’ll end with a shoutout to the legislators who voted for this bill: if you didn’t realize how the bill was packed with radical policy ideas when you voted yes, did you even do your job?

Filed Under: ab 2273, age appropriate design code, california, face scans, fiduciary duty, for the children, gavin newsom, parents, permissionless innovation, protect the children

We’ve already noted how Netflix’s password sharing crackdown is a dumb cash grab. The company already cordons users off into pay tiers based on a number of different criteria, including how many simultaneous streams a single account can use at one time. And it just got done imposing a major price hike on most of its subscribers, with more on the way.

Then the company started to see actual competition in the streaming space. Wall Street doesn’t much care that the market has changed, and, as Wall Street always does, demands quarter-over-quarter growth at any cost. So Netflix developed an ingenious plan to nickel-and-dime existing users with additional fees if Netflix determines passwords are being shared outside of the home.

The change hasn’t come to the U.S. yet, but it’s expected to soon. Enforcement overseas has been a bit of a mess, with inconsistent billing, enforcement, and messaging to consumers. It’s all a giant headache for a “problem” Netflix used to make clear wasn’t actually a problem:

Love is sharing a password.

Enter Adobe, which apparently thinks it can help Netflix crack down on this nonexistent menace with machine learning systems that study user behavior in intricate detail. To sell Netflix on the idea, Adobe apparently claims that Netflix is suffering somewhere around $9 billion annually in potential losses due to password sharing, which Adobe more clinically dubs “credential sharing”:

Adobe prefers the term ‘credential sharing’ to ‘password piracy’ but doesn’t downplay its implications. Citing a 2020 study, Adobe says that up to 46 million people in the U.S. could be accessing streaming services with credentials that aren’t theirs while paying nothing for the privilege.

Citing potential losses of $9bn per year – three times those of rival Disney+ – Adobe says Netflix suffers most from credential sharing. The company believes that if streaming video is to avoid the fate of streaming music where free content is expected, action is needed sooner rather than later.

I don’t know where Adobe is getting the insane $9 billion estimate from. Other analysts like Cowen and Co have suggested that Netflix stands to make $1.6 billion extra annually from a password sharing crackdown, and even that seems generous.

One, such analysis doesn’t factor in that Netflix is already monetizing password sharing through limits on concurrent streams. Two, users are already facing blanket price hikes and will not be responsive to new hikes, no matter how cleverly they’re messaged. And three, the Cowen estimate assumes that half of all password sharers would sign up for a new account, which is overly generous.
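To see how sensitive these headline numbers are to their underlying assumptions, here is a rough, illustrative sketch. The 46 million sharer figure is the one Adobe cites; the prices and conversion rates below are assumptions of mine for illustration, not numbers from Adobe or Cowen.

```python
# Rough sketch of how conversion-rate and price assumptions drive the
# revenue estimates discussed above. Only the 46 million figure comes from
# the article; prices and conversion rates are illustrative assumptions.

US_PASSWORD_SHARERS = 46_000_000

def annual_revenue(sharers: int, conversion_rate: float, monthly_price: float) -> float:
    """Annual revenue if `conversion_rate` of sharers pay `monthly_price` per month."""
    return sharers * conversion_rate * monthly_price * 12

# If every single sharer paid for a full-price plan (assume ~$15.49/month),
# you land in the same ballpark as Adobe's $9 billion estimate, which shows
# how heroic that assumption is.
print(f"100% convert at $15.49/mo: ${annual_revenue(US_PASSWORD_SHARERS, 1.00, 15.49) / 1e9:.1f}B")

# A more modest scenario: a quarter of sharers pay a cheaper add-on fee.
print(f"25% convert at $7.99/mo:   ${annual_revenue(US_PASSWORD_SHARERS, 0.25, 7.99) / 1e9:.1f}B")
```

Whatever exact numbers you plug in, the headline figure is driven almost entirely by how many sharers you assume will ever pay anything at all.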

Other analysts are highly skeptical that Netflix’s password crackdown pays significant dividends at all:

Benchmark Co. analyst Matthew Harrigan, in a note last week, expressed skepticism that it would be a “growth game-changer,” opining that the strategy “cannibalizes full-ride member growth.” He pegged the incremental revenue lift at less than 4% revenue, even with generous assumptions about how many piggybackers Netflix might be able to convert to Extra Member accounts.

There’s a real risk that Netflix only annoys customers with greater fees and restrictions at a time when they’re already losing customers and facing more streaming competition than ever. And with Wall Street demanding growth at any cost, there’s a high chance Netflix pushes its luck on both password sharing and finding annoying ways to monetize the account data Adobe is collecting.

If you’re noticing a lot of egomania, fuzzy numbers, and wishful thinking on Netflix’s part, that’s because as Netflix has shifted from innovation to turf protection, it joined the Motion Picture Association and adopted much of the broader cable, broadcast, and entertainment industry’s (sometimes facts-optional) rhetoric… especially when it comes to the diabolical menace that is password sharing.

Filed Under: cable tv, password sharing, streaming, video Companies: netflix
